Background and Purpose: Colorectal cancer is a common fatal malignancy, the fourth most common cancer in men and the third most common cancer in women worldwide. Timely detection of the cancer in its early stages is essential for treatment. Currently, there is a lack of datasets for histopathological image segmentation of rectal cancer, which often hampers assessment accuracy when computer-aided diagnosis is used. Methods: This study provides a new publicly available Enteroscope Biopsy Histopathological Hematoxylin and Eosin Image Dataset for Image Segmentation Tasks (EBHI-Seg). To demonstrate the validity and breadth of EBHI-Seg, we evaluate it with both classical machine learning methods and deep learning methods. Results: The experiments show that deep learning methods achieve better segmentation performance on EBHI-Seg: the best Dice score of the classical machine learning methods is 0.948, while the deep learning methods reach a Dice score of 0.965. Conclusion: This publicly available dataset contains 5,170 images covering six tumor differentiation stages, together with the corresponding ground truth images. It can support researchers in developing new segmentation algorithms for the medical diagnosis of colorectal cancer, which can assist doctors and patients in clinical settings.
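Since the abstract reports Dice scores, a quick reference for how that metric is computed on binary masks may be useful. This is a minimal illustrative sketch; the function name and toy masks below are not from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|P ∩ G| / (|P| + |G|) for binary masks P (prediction) and G (ground truth)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a 4x4 prediction partially overlapping a 4x4 ground-truth mask.
pred = np.array([[0, 1, 1, 0]] * 4)
gt   = np.array([[0, 1, 0, 0]] * 4)
print(f"Dice: {dice_coefficient(pred, gt):.3f}")  # 0.667 on this toy pair
```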
We present a novel, simple yet effective algorithm for motion-based video frame interpolation. Existing motion-based interpolation methods typically rely on a pre-trained optical flow model or a U-Net based pyramid network for motion estimation, which either has a large model size or limited capacity for handling complex and large-motion cases. In this work, by carefully integrating intermediate-oriented forward warping, a lightweight feature encoder, and a correlation volume into a pyramid recurrent framework, we derive a compact model that simultaneously estimates the bidirectional motion between the input frames. It is 15 times smaller than PWC-Net, yet handles challenging motion cases more reliably and flexibly. Based on the estimated bidirectional motion, we forward-warp the input frames and their context features to the intermediate frame, and employ a synthesis network to estimate the intermediate frame from the warped representations. Our method achieves excellent performance on a wide range of video frame interpolation benchmarks. Code will be available soon.
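To make the forward-warping step concrete, here is a minimal NumPy sketch of splatting pixels along flow scaled to an intermediate time step. It is a simplification under stated assumptions (nearest-pixel splatting, illustrative names); the paper's model uses learned pyramid-recurrent motion estimation and a synthesis network rather than this bare operation.

```python
import numpy as np

def forward_warp(frame: np.ndarray, flow: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Splat the pixels of `frame` (H, W, C) along `flow` (H, W, 2), scaled by t,
    to approximate the frame at intermediate time t. Nearest-pixel splatting with
    hit-count averaging; real interpolators typically use softmax splatting with
    learned weights and fill remaining holes via a synthesis network."""
    frame = frame.astype(np.float64)
    h, w, _ = frame.shape
    out = np.zeros_like(frame)
    hits = np.zeros((h, w, 1))
    ys, xs = np.mgrid[0:h, 0:w]
    # Scale the frame-to-frame flow by t so pixels land at the intermediate time.
    xt = np.clip(np.round(xs + t * flow[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + t * flow[..., 1]).astype(int), 0, h - 1)
    np.add.at(out, (yt, xt), frame)    # accumulate pixels at target locations
    np.add.at(hits, (yt, xt), 1.0)
    return out / np.maximum(hits, 1.0)  # average where several pixels splat
```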
Thanks to the rapid progress of deep learning techniques and the wide availability of large-scale training sets, the performance of video saliency detection models has been improving steadily. However, deep learning-based visual-audio fixation prediction is still in its infancy. At present, only a few visual-audio sequences are available in which real fixations were recorded in real visual-audio environments; recollecting real fixations under the same visual-audio circumstances is therefore neither efficient nor necessary. To address this problem, this paper proposes a novel weakly-supervised approach that alleviates the need for large-scale training sets when training visual-audio models. Using only video category tags, we propose selective class activation mapping (SCAM) and its upgrade (SCAM+). In the spatial-temporal-audio circumstance, the former follows a coarse-to-fine strategy to select the most discriminative regions, which usually exhibit high consistency with real human-eye fixations. The latter equips SCAM with an additional multi-granularity perception mechanism, making the whole process more consistent with the real human visual system. Moreover, we distill knowledge from these regions to obtain complete new spatial-temporal-audio (STA) fixation prediction (FP) networks, enabling broad applications in cases where video tags are unavailable. Without resorting to any real human-eye fixations, the performance of these STA FP networks is comparable to that of fully supervised networks. The code and results are publicly available at https://github.com/guotaowang/stanet.
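SCAM builds on class activation mapping; the following is a minimal PyTorch sketch of the plain CAM primitive it extends. The selective and multi-granularity machinery is not shown, and all names and shapes are illustrative.

```python
import torch

def class_activation_map(features: torch.Tensor, fc_weight: torch.Tensor,
                         class_idx: int) -> torch.Tensor:
    """Plain CAM: weight the final conv feature maps (C, H, W) by the linear
    classifier weights for one class, sum over channels, then normalize."""
    cam = (fc_weight[class_idx].view(-1, 1, 1) * features).sum(dim=0)
    cam = torch.relu(cam)            # keep only positively contributing regions
    return cam / (cam.max() + 1e-7)  # normalize to [0, 1]

# Toy example: 8 feature maps of size 7x7 and a 5-class linear head.
feats = torch.randn(8, 7, 7)
w = torch.randn(5, 8)
heatmap = class_activation_map(feats, w, class_idx=2)  # (7, 7) saliency map
```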
Decentralized state estimation is one of the most fundamental components of autonomous aerial swarm systems in GPS-denied areas, yet it remains a highly challenging research topic. This paper proposes Omni-swarm, a decentralized omnidirectional visual-inertial-UWB state estimation system, to address this research niche. To solve the problems of observability, complicated initialization, insufficient accuracy, and lack of global consistency, we introduce an omnidirectional perception front-end in Omni-swarm. It consists of stereo wide-field-of-view cameras and ultra-wideband sensors, visual-inertial odometry, multi-drone map-based localization, and a visual drone tracking algorithm. The measurements from the front-end are fused with graph-based optimization in the back-end. The proposed method achieves centimeter-level relative state estimation accuracy while ensuring global consistency within the aerial swarm, as demonstrated by experimental results. Moreover, inter-drone collision avoidance can be supported without any external device, indicating the potential of Omni-swarm to serve as the foundation of autonomous aerial swarms.
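To illustrate what "graph-based optimization in the back-end" means at its simplest, here is a hedged one-dimensional sketch: recovering drone positions from noisy relative measurements by least squares, with a prior fixing the gauge. All numbers are made up; the real back-end fuses visual-inertial and UWB measurements over full 6-DoF poses rather than scalar positions.

```python
import numpy as np

# Edges of the measurement graph: (i, j, measured x_j - x_i), all illustrative.
edges = [(0, 1, 2.1), (1, 2, 1.9), (0, 2, 4.05)]
n = 3  # number of drones

# One linear equation per edge, plus a prior row anchoring drone 0 at x = 0
# (without it the graph has a gauge freedom: any common shift fits equally well).
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for row, (i, j, z) in enumerate(edges):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, z
A[-1, 0] = 1.0

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # ≈ [0.00, 2.12, 4.03], reconciling the slightly inconsistent edges
```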
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting previously learned classes.
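As a sketch of how CLIP embeddings can drive a segmentation model, the snippet below encodes class names with the OpenAI CLIP package and maps each text embedding to per-class head parameters. The prompt template, layer sizes, and the `controller` module are illustrative assumptions, not the paper's exact design.

```python
import torch
import clip  # OpenAI CLIP, https://github.com/openai/CLIP

# Encode class names as CLIP text embeddings (prompt wording is an assumption).
model, _ = clip.load("ViT-B/32", device="cpu")
classes = ["liver", "liver tumor", "pancreas"]
prompts = clip.tokenize([f"a computerized tomography of a {c}" for c in classes])
with torch.no_grad():
    text_emb = model.encode_text(prompts).float()  # (3, 512) for ViT-B/32

# Illustrative conditioning: map each class embedding to parameters of a shared
# segmentation head, so adding a class only means adding a text prompt.
controller = torch.nn.Linear(512, 8)       # output size is a placeholder
per_class_params = controller(text_emb)    # one parameter vector per class
```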
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative: their goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration, explicitly encoding more pixel-level information into the high-level semantics. We also address the preservation of scale information, a powerful tool for image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose a non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
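The core of the framework is a multi-task objective combining pixel restoration with siamese feature comparison. Below is a hedged single-scale sketch of such an objective; the actual method applies supervision at multiple pyramid levels, and the weighting and stop-gradient choice here are illustrative.

```python
import torch
import torch.nn.functional as F

def restoration_plus_comparison_loss(decoded: torch.Tensor,
                                     target: torch.Tensor,
                                     z1: torch.Tensor,
                                     z2: torch.Tensor,
                                     alpha: float = 1.0) -> torch.Tensor:
    """Two-term SSL objective: pixel restoration (MSE between the decoded view
    and its original) plus siamese feature comparison (negative cosine
    similarity between projected views, with a stop-gradient on one branch).
    Multi-scale supervision would apply this at each pyramid level."""
    restore = F.mse_loss(decoded, target)
    compare = -F.cosine_similarity(z1, z2.detach(), dim=-1).mean()
    return restore + alpha * compare
```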
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, and cardinality. Our 900M-parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The 3B-parameter Muse model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
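The parallel decoding idea can be sketched in a few lines: iteratively predict all masked tokens at once and commit only the most confident ones. The snippet below is a simplified, illustrative loop (the linear schedule, `model_fn`, and token counts are assumptions), not Muse's exact procedure.

```python
import torch

def parallel_decode(model_fn, num_tokens=256, steps=8, mask_id=1024):
    """Start with all image tokens masked; at every step predict all tokens in
    parallel and keep only the most confident predictions, re-masking the rest
    on a linear schedule. `model_fn` stands in for the text-conditioned
    Transformer and returns (num_tokens, vocab) logits."""
    tokens = torch.full((num_tokens,), mask_id, dtype=torch.long)
    for step in range(1, steps + 1):
        logits = model_fn(tokens)
        conf, pred = logits.softmax(-1).max(dim=-1)   # per-token confidence
        n_keep = num_tokens * step // steps           # unmask more each step
        keep = conf.topk(n_keep).indices
        tokens = torch.full((num_tokens,), mask_id, dtype=torch.long)
        tokens[keep] = pred[keep]
    return tokens

# Usage with a random stand-in for the Transformer:
tokens = parallel_decode(lambda t: torch.randn(t.numel(), 1024))
```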
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
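To ground the greedy idea, here is a hedged static-setting sketch in which cross-validated accuracy stands in for the conditional mutual information I(y; x_i | x_selected); the paper instead learns an amortized network that selects features sequentially from the information observed so far. All names and the scoring proxy are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def greedy_select(X: np.ndarray, y: np.ndarray, budget: int = 5) -> list:
    """Greedy forward selection: repeatedly add the feature that most improves
    a proxy for the conditional mutual information with the label, given the
    features already selected."""
    selected = []
    for _ in range(budget):
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        def score(j):
            cols = selected + [j]
            clf = LogisticRegression(max_iter=1000)
            return cross_val_score(clf, X[:, cols], y, cv=3).mean()
        selected.append(max(remaining, key=score))
    return selected

# Toy usage on random data (binary labels), purely illustrative:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 3] + X[:, 7] > 0).astype(int)
print(greedy_select(X, y, budget=2))  # likely picks features 3 and 7
```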
Human parsing aims to partition humans in images or video into multiple pixel-level semantic parts. In the last decade, it has attracted significantly increased interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring to social media to visual special effects, to name just a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions remain unclear. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.